



A Guide Through the Zoo of Biased SGD

Neural Information Processing Systems

We also provide examples where biased estimators outperform their unbiased counterparts or where unbiased versions are simply not available. Finally, we demonstrate the effectiveness of our framework through experimental results that validate our theoretical findings.
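The claim that biased estimators can perform well may be illustrated with a toy sketch (not taken from the paper; all names here are illustrative): top-k gradient sparsification is a standard *biased* gradient compressor, yet plain gradient descent using it still converges on a simple quadratic.

```python
import numpy as np

# Hedged sketch, assuming a toy objective f(x) = 0.5 * ||x||^2 whose exact
# gradient is x. The top-k compressor keeps only the k largest-magnitude
# coordinates of the gradient, which makes the update direction biased.

def top_k(g, k):
    """Keep the k largest-magnitude entries of g, zero out the rest (biased)."""
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

def biased_sgd(x0, steps=200, lr=0.1, k=2):
    x = x0.copy()
    for _ in range(steps):
        g = x                      # exact gradient of 0.5 * ||x||^2
        x = x - lr * top_k(g, k)   # biased update: only k coords move per step
    return x

x = biased_sgd(np.array([3.0, -2.0, 1.0, 0.5]))
print(np.linalg.norm(x))  # small: the iterate approaches the minimizer 0
```

Each step shrinks the two largest-magnitude coordinates; coordinates that are temporarily ignored become the largest later and are then selected, so all of them decay despite the bias.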


A Baseline algorithms

Neural Information Processing Systems

The following theorem is a more general version of Theorem 5.1; assume that Assumptions 1 to 3 hold. The only difference between Theorem B.1 and Theorem 5.1 lies in the "oldest" response used in the update. By Jensen's inequality and L-smoothness, we obtain a descent inequality for f. To keep the paper self-contained, we restate the proof here; the following lemma is slightly modified from Lemma 8 in [18]. Applying Lemma B.1 and combining Appendix B.3.1 with Appendix B.3.2 yields the intermediate bounds. B.4 Deriving the convergence bound: in this subsection, we obtain Theorem B.1 based on the descent lemma.
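The "descent lemma" referred to in this excerpt has a standard form under L-smoothness; the excerpt's own equations did not survive extraction, so the following is a hedged reconstruction with generic symbols, not the paper's exact statement.

```latex
% Standard descent lemma for an L-smooth function f:
f(x^{t+1}) \le f(x^t) + \langle \nabla f(x^t),\, x^{t+1} - x^t \rangle
            + \frac{L}{2}\,\bigl\| x^{t+1} - x^t \bigr\|^2 .
% With the update x^{t+1} = x^t - \gamma g^t for a (possibly biased)
% gradient estimator g^t, this specializes to
f(x^{t+1}) \le f(x^t) - \gamma \langle \nabla f(x^t),\, g^t \rangle
            + \frac{L \gamma^2}{2}\,\bigl\| g^t \bigr\|^2 .
```

Bounds of this shape are the usual starting point for deriving convergence rates such as the one in Theorem B.1.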





BC-ADMM: An Efficient Non-convex Constrained Optimizer with Robotic Applications

Pan, Zherong, Wu, Kui

arXiv.org Artificial Intelligence

Non-convex constrained optimizations are ubiquitous in robotic applications such as multi-agent navigation, UAV trajectory optimization, and soft robot simulation. For this problem class, conventional optimizers suffer from small step sizes and slow convergence. We propose BC-ADMM, a variant of the Alternating Direction Method of Multipliers (ADMM), that can solve a class of non-convex constrained optimizations with biconvex constraint relaxation. Our algorithm allows larger step sizes by breaking the problem into small-scale sub-problems that can be easily solved in parallel. We show that our method has both theoretical convergence-speed guarantees and practical convergence guarantees in the asymptotic sense. Through numerical experiments across four robotic applications, we show that BC-ADMM has faster convergence than conventional gradient descent and Newton's method in terms of wall-clock time.
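The alternating sub-problem structure the abstract describes can be sketched with classic two-block ADMM on a toy consensus problem (this is a generic ADMM illustration, not the paper's BC-ADMM, and the biconvex constraint relaxation is not shown): minimize f(x) + g(z) subject to x = z, with f(x) = 0.5(x - a)^2 and g(z) = 0.5(z - b)^2, so both sub-problems have closed-form solutions.

```python
# Hedged sketch of two-block ADMM with closed-form sub-problem updates.
# Function names and the toy objective are illustrative assumptions.

def admm_toy(a, b, rho=1.0, iters=100):
    x = z = u = 0.0  # u is the scaled dual variable
    for _ in range(iters):
        # x-update: argmin_x 0.5*(x - a)^2 + (rho/2)*(x - z + u)^2
        x = (a + rho * (z - u)) / (1.0 + rho)
        # z-update: argmin_z 0.5*(z - b)^2 + (rho/2)*(x - z + u)^2
        z = (b + rho * (x + u)) / (1.0 + rho)
        # dual ascent on the consensus constraint x = z
        u += x - z
    return x, z

x, z = admm_toy(a=1.0, b=3.0)
print(x, z)  # both approach the joint minimizer (a + b) / 2 = 2.0
```

Because each block is solved exactly while the other is held fixed, the sub-problems stay small and independent, which is what makes ADMM-style methods amenable to the parallel, larger-step-size updates the abstract mentions.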


Mean-Field Generalisation Bounds for Learning Controls in Stochastic Environments

Baros, Boris, Cohen, Samuel N., Reisinger, Christoph

arXiv.org Machine Learning

When solving stochastic control problems, one is often limited by the challenge of specifying realistic model dynamics of the involved processes. Parametric approaches to estimating dynamics introduce model error, while 'model-free' approaches typically suffer from severe curse-of-dimensionality constraints. The development of reliable machine-learning-based methods for stochastic control is therefore of significant practical interest. In this paper, we focus on problems where a decision maker faces a stochastic environment, that is, where they interact with a system with unknown and uncontrolled stochastic dynamics, which, together with their control, induce a controlled state process and costs. One example is optimal investment for a small investor: here the stochastic dynamics of assets are uncontrolled and unknown, the investor chooses a strategy based on past observations, and together these generate a wealth process which must be optimised. A second example is aerial navigation in the presence of uncertain weather: the weather is unaffected by the navigation policy chosen, while the navigator must account for uncertainties in their planning, and the resulting flight plan needs to be optimised. In both these cases, the stochastic environment is naturally high-dimensional and may not be Markovian, and so is challenging to model statistically using finitely many observations. We consider the setting where we have access to a finite number of i.i.d.